The increased use of machine learning (ML) components embedded in autonomous systems -- so-called learning-enabled systems (LES) -- has resulted in the pressing need to assure their functional safety. As with traditional functional safety, the emerging consensus in industry and academia is to use assurance cases for this purpose. Typically, assurance cases support claims of reliability in support of safety, and can be viewed as a structured way of organising the arguments and evidence generated from safety analysis and reliability modelling activities. While such assurance activities have traditionally been guided by consensus-based standards, LES pose new challenges in safety-critical applications due to the characteristics and design of ML models. In this paper, we first present an overall assurance framework for LES with an emphasis on quantitative aspects, e.g., breaking down system-level safety targets into component-level requirements and supporting claims stated in reliability metrics. We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers that utilises operational profiles and robustness verification evidence. We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM, and propose practical solutions. Probabilistic safety arguments at the lower ML component level are also developed based on the RAM. Finally, to evaluate and demonstrate our methods, we not only conduct experiments on synthetic/benchmark datasets but also present a comprehensive case study of an Autonomous Underwater Vehicle in simulation.
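The abstract's idea of combining an operational profile with robustness verification evidence can be illustrated numerically. The sketch below is a toy illustration only, not the paper's actual RAM: it partitions the input space into cells and combines each cell's (hypothetical) operational-profile probability with a (hypothetical) robustness-derived misclassification bound into an overall unreliability estimate.

```python
# Toy sketch in the spirit of a model-agnostic reliability estimate:
# partition the input space into cells, then combine each cell's
# operational-profile probability Op(cell) with a misclassification
# bound lambda(cell) from robustness verification.
# All numbers below are hypothetical illustrations, not values from the paper.

def estimate_pmi(cells):
    """Probability of misclassification per input:
    sum over cells of Op(cell) * lambda(cell)."""
    return sum(op * lam for op, lam in cells)

# (operational-profile probability, misclassification bound) per cell
cells = [(0.6, 0.001), (0.3, 0.01), (0.1, 0.05)]
pmi = estimate_pmi(cells)
print(pmi)  # overall unreliability estimate for the classifier
```

Such a per-input misclassification probability is the kind of component-level reliability metric that a system-level safety target could be broken down into.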
Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalizations: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and further exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats -- PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks but is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework.
We seek methods to model, control, and analyze robot teams performing environmental monitoring tasks. During environmental monitoring, the goal is to have teams of robots collect various data throughout a fixed region for extended periods of time. Standard bottom-up task assignment methods do not scale as the number of robots and task locations increases and require computationally expensive replanning. Alternatively, top-down methods have been used to combat computational complexity, but most have been limited to the analysis of methods which focus on transition times between tasks. In this work, we study a class of nonlinear macroscopic models which we use to control a time-varying distribution of robots performing different tasks throughout an environment. Our proposed ensemble model and control maintains desired time-varying populations of robots by leveraging naturally occurring interactions between robots performing tasks. We validate our approach at multiple fidelity levels including experimental results, suggesting the effectiveness of our approach to perform environmental monitoring.
As demand for large corpora increases with the size of current state-of-the-art language models, using web data as the main part of the pre-training corpus for these models has become a ubiquitous practice. This, in turn, has introduced an important challenge for NLP practitioners, as they are now confronted with the task of developing highly optimized models and pipelines for pre-processing large quantities of textual data, which implies effectively classifying and filtering multilingual, heterogeneous and noisy data at web scale. One of the main components of this pre-processing step for the pre-training corpora of large language models is the removal of adult and harmful content. In this paper we explore different methods for detecting adult and harmful content in multilingual heterogeneous web data. We first show how traditional methods for harmful content detection, which seemingly perform quite well on small and specialized datasets, quickly break down when confronted with heterogeneous noisy web data. We then resort to a perplexity-based approach, but with a twist: instead of training a small language model on a so-called "clean" corpus and then selecting the documents with low perplexity, i.e., the documents that most resemble this "clean" corpus, we train solely on adult and harmful textual data and then select the documents having a perplexity value above a given threshold. This approach virtually clusters the documents into two distinct groups, which greatly facilitates the choice of the perplexity threshold and also allows us to obtain higher precision than with traditional classification methods for detecting adult and harmful content.
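The inverted filtering idea can be sketched with a tiny language model: train on (stand-in) harmful text, then keep documents whose perplexity is high, i.e., those that look unlike the harmful corpus. This is a minimal illustration assuming a unigram word model and made-up corpora and threshold, not the paper's actual model or data.

```python
import math
from collections import Counter

# Minimal sketch of the perplexity-based filtering idea: train a tiny
# unigram model on stand-in "harmful" text, then score documents;
# documents scoring ABOVE the threshold look unlike the harmful corpus
# and are kept. Corpora and threshold are hypothetical placeholders.

def train_unigram(corpus, alpha=1.0):
    tokens = corpus.split()
    counts = Counter(tokens)
    total, vocab = len(tokens), len(set(tokens))
    def logprob(tok):
        # add-alpha smoothing; unseen tokens get the smoothed floor
        return math.log((counts.get(tok, 0) + alpha) / (total + alpha * (vocab + 1)))
    return logprob

def perplexity(doc, logprob):
    toks = doc.split()
    return math.exp(-sum(logprob(t) for t in toks) / len(toks))

harmful_lm = train_unigram("bad bad words bad words here")
docs = ["bad words here", "a perfectly ordinary sentence"]
kept = [d for d in docs if perplexity(d, harmful_lm) > 5.0]  # illustrative threshold
print(kept)
```

Because documents either resemble the harmful training data (low perplexity) or do not (high perplexity), the scores tend to separate into two groups, which is what makes the threshold easy to place.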
Any strategy used to distribute a robot ensemble over a set of sequential tasks is subject to inaccuracy due to robot-level uncertainties and environmental influences on the robots' behavior. We approach the problem of inaccuracy during task allocation by modeling and controlling the overall ensemble behavior. Our model represents the allocation problem as a stochastic jump process and we regulate the mean and variance of such a process. The main contributions of this paper are: Establishing a structure for the transition rates of the equivalent stochastic jump process and formally showing that this approach leads to decoupled parameters that allow us to adjust the first- and second-order moments of the ensemble distribution over tasks, which gives the flexibility to decrease the variance in the desired final distribution. This allows us to directly shape the impact of uncertainties on the group allocation over tasks. We introduce a detailed procedure to design the gains to achieve the desired mean and show how the additional parameters impact the covariance matrix, which is directly associated with the degree of task allocation precision. Our simulation and experimental results illustrate the successful control of several robot ensembles during task allocation.
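The ensemble-level view in this abstract can be made concrete with a small simulation. The sketch below is a simplified stand-in, not the paper's designed controller: each robot independently jumps between two tasks with fixed transition probabilities, and the resulting population fraction at task 0 concentrates around the stationary value determined by the rates.

```python
import random

# Minimal sketch of ensemble task allocation as a stochastic jump process:
# each robot jumps between task 0 and task 1 with fixed per-step transition
# probabilities p01 and p10. The rates are illustrative, not designed gains.

def simulate(n_robots=1000, steps=200, p01=0.1, p10=0.2, seed=0):
    rng = random.Random(seed)
    tasks = [0] * n_robots
    for _ in range(steps):
        for i in range(n_robots):
            if tasks[i] == 0 and rng.random() < p01:
                tasks[i] = 1
            elif tasks[i] == 1 and rng.random() < p10:
                tasks[i] = 0
    return tasks.count(0) / n_robots  # fraction of the ensemble at task 0

frac0 = simulate()
expected = 0.2 / (0.1 + 0.2)  # stationary fraction at task 0: p10 / (p01 + p10)
print(frac0, expected)
```

The spread of `frac0` around `expected` across runs is the allocation variance that the paper's second-order moment control is designed to shrink; here it is simply a consequence of the finite ensemble size.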
Scaling up language models has led to unprecedented performance gains, but little is understood about how the training dynamics change as models get larger. How do language models of different sizes learn during pre-training? Why do larger language models demonstrate more desirable behaviors? In this paper, we analyze the intermediate training checkpoints of differently sized OPT models (Zhang et al.,2022)--from 125M to 175B parameters--on next-token prediction, sequence-level generation, and downstream tasks. We find that 1) at a given perplexity and independent of model sizes, a similar subset of training tokens see the most significant reduction in loss, with the rest stagnating or showing double-descent behavior; 2) early in training, all models learn to reduce the perplexity of grammatical sequences that contain hallucinations, with small models halting at this suboptimal distribution and larger ones eventually learning to assign these sequences lower probabilities; 3) perplexity is a strong predictor of in-context learning performance on 74 multiple-choice tasks from BIG-Bench, and this holds independent of the model size. Together, these results show that perplexity is more predictive of model behaviors than model size or training computation.
Identifying named entities such as a person, location or organization in documents can highlight key information to readers. Training Named Entity Recognition (NER) models requires an annotated data set, which can be a time-consuming, labour-intensive task. Nevertheless, there are publicly available NER data sets for general English. Recently there has been interest in developing NER for legal text. However, prior work and experimental results reported here indicate that there is a significant degradation in performance when NER methods trained on a general English data set are applied to legal text. We describe a publicly available legal NER data set, called E-NER, based on legal company filings available from the US Securities and Exchange Commission's EDGAR data set. Training a number of different NER algorithms on the general English CoNLL-2003 corpus but testing on our test collection confirmed significant degradations in accuracy, as measured by the F1-score, of between 29.4\% and 60.4\%, compared to training and testing on the E-NER collection.
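The degradation figures quoted in the abstract are relative drops in entity-level F1. The sketch below shows how such a comparison is computed, using made-up toy predictions (the entities and labels are illustrative, not from E-NER or CoNLL-2003).

```python
# Sketch of the evaluation behind the quoted degradations: entity-level
# precision/recall/F1 for a NER system, plus the relative F1 drop between
# an in-domain and a cross-domain model. Toy predictions for illustration.

def f1(gold, pred):
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = {("ACME Corp", "ORG"), ("SEC", "ORG"), ("New York", "LOC")}
in_domain_pred = {("ACME Corp", "ORG"), ("SEC", "ORG"), ("New York", "LOC")}
cross_domain_pred = {("ACME Corp", "ORG")}  # cross-domain model misses two entities

f1_in = f1(gold, in_domain_pred)
f1_cross = f1(gold, cross_domain_pred)
degradation = (f1_in - f1_cross) / f1_in
print(f"relative F1 degradation: {degradation:.1%}")
```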
A new method for solving the wave equation is presented, called the learned Born series (LBS), which is derived from the convergent Born series but whose components are found through training. The LBS is shown to be significantly more accurate than the convergent Born series for the same number of iterations in the presence of high-contrast scatterers, while maintaining comparable computational complexity. The LBS is able to generate a reasonable prediction of the global pressure field with a small number of iterations, and the errors decrease with the number of learned iterations.
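For readers unfamiliar with Born-type solvers: the scattering equation u = u0 + GVu can be attacked by the fixed-point iteration u_{k+1} = u0 + GVu_k, which converges when the operator GV is a contraction; the learned variant would replace the fixed components of each iteration with trained ones. The sketch below runs the plain (unlearned) series on scalar stand-in values purely to show the geometric error decay with iteration count.

```python
# Toy sketch of a Born-type series on scalars: solve u = u0 + gv*u by the
# fixed-point iteration u_{k+1} = u0 + gv*u_k, which converges when |gv| < 1.
# In the learned Born series the per-iteration components would be trained;
# here the values u0=1.0 and gv=0.5 are arbitrary stand-ins.

def born_series(u0, gv, iterations):
    u = u0
    history = []
    for _ in range(iterations):
        u = u0 + gv * u          # one Born iteration
        history.append(u)
    return u, history

u_exact = 1.0 / (1.0 - 0.5)      # closed-form solution of u = u0 + gv*u
u_approx, hist = born_series(1.0, 0.5, 20)
print(abs(u_approx - u_exact))   # error shrinks geometrically with iterations
```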
Deep reinforcement learning (DRL) algorithms have achieved cutting-edge performance on a variety of challenging reinforcement learning (RL) problems, including complex control tasks with high-dimensional continuous state and action spaces. Despite widely reported successes, existing DRL algorithms often suffer from ineffective exploration, leading to limited learning stability and performance. To address this limitation, several ensemble DRL algorithms have recently been proposed to enhance exploration and stabilise the learning process. However, many existing ensemble algorithms are designed to train each base learner individually, without explicitly controlling collaboration among the trained base learners. In this paper, we propose a new technique for training an ensemble of base learners based on a multi-step integration method. This multi-step training technique enables us to develop a new hierarchical training algorithm for ensemble DRL that promotes inter-learner collaboration through explicit inter-learner parameter sharing. The design of the new algorithm is verified theoretically. The algorithm is also shown empirically to outperform several cutting-edge DRL algorithms on multiple benchmark RL problems.
With the exponential growth of the population, it is vital to conserve natural resources without compromising the production of enough food to feed everyone. Doing so can improve people's livelihoods, health, and ecosystems for present and future generations. Sustainable development, a United Nations paradigm, is rooted in food, crops, livestock, forests, population, and even gas emissions. By understanding the overall historical usage of natural resources in different countries, the needs of each country can be predicted. The proposed solution is to implement a machine learning system using statistical regression models that can predict the top-K products likely to face shortages in each country over a specific future period. The predictive performance, measured by mean absolute error and root mean squared error, shows encouraging results owing to low error values. This solution can help organisations and manufacturers understand the productivity and sustainability required to meet global demand.
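The regression-then-rank pipeline described above can be sketched in a few lines. This is a minimal stand-in, not the paper's system: it fits a least-squares linear trend to each product's (made-up) consumption history, forecasts the next period, and ranks the top-K products by forecast demand relative to a hypothetical supply cap.

```python
# Minimal sketch of the proposed pipeline: fit a least-squares linear trend
# to each product's historical consumption, forecast one period ahead, and
# rank the top-K products by predicted shortage (forecast minus supply).
# Product names, histories, and supply figures are made-up placeholders.

def linear_forecast(history, horizon=1):
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + horizon)

def top_k_shortages(consumption, supply, k=2):
    shortages = {p: linear_forecast(h) - supply[p] for p, h in consumption.items()}
    return sorted(shortages, key=shortages.get, reverse=True)[:k]

consumption = {"wheat": [90, 100, 110], "rice": [50, 52, 54], "maize": [80, 95, 110]}
supply = {"wheat": 118, "rice": 60, "maize": 112}
print(top_k_shortages(consumption, supply))
```

A production system would of course use richer regression models and error metrics (MAE, RMSE) on held-out years, as the abstract describes; the ranking step itself stays the same.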